feat: Add 9 new models, remove 2 obsolete, update aliases #411

timeleft-- wants to merge 6 commits into BeehiveInnovations:main
Conversation
New models: Claude Opus 4.6, Claude Sonnet 4.6, Gemini 3.1 Pro, GPT-5.4 Pro, GPT-5.3 Codex, DeepSeek V3.2, Devstral 2512, Qwen 3.5 397B, MiniMax M2.5.

Removed: meta-llama/llama-3-70b (8K context, obsolete), perplexity/llama-3-sonar-large-32k-online (legacy).

Generic aliases (opus, sonnet, pro, gpt5pro) migrated to the newest model versions. Version-specific aliases preserved for backward compatibility (opus4.5, sonnet4.5, gemini3.0, gpt5.2-pro).

Version bumped to 9.9.0. All 870 tests pass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request updates the project's AI model registry to incorporate the latest offerings from OpenRouter. It introduces nine new models, removes two outdated ones, and reconfigures generic aliases to point to the newest versions while preserving older, version-specific aliases for backward compatibility. The changes are supported by a minor version increment and test validation.
Keep version 9.9.0 from our feature branch (upstream synced to 9.8.2). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Code Review
This pull request adds 9 new models, removes 2 obsolete models, and updates various aliases and tests accordingly. The changes are extensive and mostly look good. However, I've found a few issues that need attention. In pyproject.toml, the required Python version is bumped to >=3.10 without justification, which is a breaking change. Additionally, in conf/openrouter_models.json, there are a couple of configuration issues: one new model has a max_output_tokens value that leaves no room for an input prompt, and another has a contradictory setting for temperature support. Please see my detailed comments for suggestions.
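The reviewer's point about `max_output_tokens` can be mechanized. Below is a minimal sanity-check sketch; the field name `max_output_tokens` comes from the review, while `context_window` and the example entries are assumptions for illustration, not the actual schema of `conf/openrouter_models.json`.

```python
def check_output_budget(models: dict) -> list[str]:
    """Return model ids whose max_output_tokens fills the entire
    context window, leaving no tokens for an input prompt."""
    flagged = []
    for model_id, cfg in models.items():
        context_window = cfg.get("context_window", 0)
        max_output = cfg.get("max_output_tokens", 0)
        if max_output >= context_window:
            flagged.append(model_id)
    return flagged

# Hypothetical entries mirroring the issue described in the review:
models = {
    "minimax/minimax-m2.5": {"context_window": 196608, "max_output_tokens": 196608},
    "openai/gpt-5.3-codex": {"context_window": 400000, "max_output_tokens": 128000},
}
print(check_output_budget(models))  # ['minimax/minimax-m2.5']
```

A check like this could run in CI alongside the existing test suite so a new registry entry with no input budget fails fast.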
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f428374514
- gpt-5.3-codex: supports_temperature true→false (contradicts fixed constraint)
- minimax-m2.5: max_output_tokens 196608→32768 (practical default; API allows up to 196K)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
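As a hedged illustration only (the exact schema of conf/openrouter_models.json is not shown in this thread, so the surrounding structure is an assumption), the two corrected entries might look like:

```json
{
  "openai/gpt-5.3-codex": {
    "supports_temperature": false
  },
  "minimax/minimax-m2.5": {
    "max_output_tokens": 32768
  }
}
```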
Addressed Review Feedback

- Temperature contradiction (gpt-5.3-codex) ✅: changed supports_temperature to false.
- MiniMax M2.5 max_output_tokens ✅: changed from 196608 to 32768.
- requires-python >=3.10 — Intentional, not accidental: the project has never actually worked on 3.9 due to a transitive requirement.
- gpt5pro alias cross-provider drift — Acknowledged: valid observation.
10 files were failing the black --check CI step before this PR. None are files modified by this PR — fixing them here to unblock CI. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add openai/gpt-5.4 with aliases gpt5, gpt5.4, gpt-5.4
- Move gpt5 alias from openai/gpt-5 to openai/gpt-5.4 (newest base)
- Keep gpt-5.0/gpt5.0 on old openai/gpt-5 for backward compat
- Update tests and docs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- codex → openai/gpt-5.3-codex (was gpt-5-codex)
- Old gpt-5-codex keeps codex-5.0 for backward compat

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…hiveInnovations#411) Cherry-picked model additions from BeehiveInnovations#411. New models: Claude 4.6 (Opus/Sonnet), Gemini 3.1 Pro, GPT-5.4/5.4-Pro, GPT-5.3-Codex, Devstral, DeepSeek V3.2, Qwen 3.5, MiniMax M2.5. Updated generic aliases (opus→4.6, sonnet→4.6, pro→3.1, gpt5→5.4, codex→5.3) with version-specific aliases for backward compatibility. Fixed no-API-keys test to account for ADC fallback from PR BeehiveInnovations#306.
ship it! 💯
@guidedways Can we get this in pls? Looking forward to using gpt 5.4 with consensus / codereview
…im to 5 core tools

Merged community PR BeehiveInnovations#411 adding GPT-5.4, Gemini 3.1 Pro, Claude Opus 4.6, Grok 4.1, and 5 other frontier models to the OpenRouter catalog.

Deleted 13 tools, keeping only the 5 we use:

- consensus (multi-model debate)
- chat (direct model access)
- debug (root cause analysis)
- thinkdeep (extended reasoning)
- codereview (structured code review)

Removed: precommit, planner, challenge, apilookup, analyze, refactor, testgen, secaudit, docgen, tracer, version, listmodels, clink. Each replaced by existing tools in our stack (Context7, deploy hooks, EnterPlanMode, CLAUDE.md directives, PreToolUse hook). 9,757 lines removed. Clean, focused fork.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closing — all changes from this PR are already merged into the MachineWisdomAI fork and superseded by #430 (April 2026 model refresh). |
Summary
- Removed: meta-llama/llama-3-70b (8K context), perplexity/llama-3-sonar-large-32k-online (legacy)
- Generic aliases (opus, sonnet, pro, gpt5pro) moved to newest model versions; version-specific aliases preserved for backward compatibility (opus4.5, sonnet4.5, gemini3.0, gpt5.2-pro)
- (/api/v1/models)

Details
New models
- anthropic/claude-opus-4.6
- anthropic/claude-sonnet-4.6
- google/gemini-3.1-pro-preview
- openai/gpt-5.4-pro
- openai/gpt-5.3-codex
- deepseek/deepseek-v3.2-exp
- mistralai/devstral-2512
- qwen/qwen3.5-397b-a17b
- minimax/minimax-m2.5

Alias migration (backward compatible)
| Alias | Now points to | Version-specific alias |
| --- | --- | --- |
| opus | anthropic/claude-opus-4.6 | opus4.5 still works |
| sonnet | anthropic/claude-sonnet-4.6 | sonnet4.5 still works |
| pro, gemini | google/gemini-3.1-pro-preview | gemini3.0 still works |
| gpt5pro | openai/gpt-5.4-pro | gpt5.2-pro still works |

Test plan
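The alias migration described above can be sketched as a simple lookup. This is an illustrative sketch only, assuming a flat dict-based registry; the real mapping lives in conf/openrouter_models.json and the `resolve` helper is hypothetical, not the project's actual API.

```python
# Generic aliases track the newest release; version-specific
# aliases stay pinned for backward compatibility.
ALIASES = {
    "opus": "anthropic/claude-opus-4.6",
    "sonnet": "anthropic/claude-sonnet-4.6",
    "pro": "google/gemini-3.1-pro-preview",
    "gemini": "google/gemini-3.1-pro-preview",
    "gpt5pro": "openai/gpt-5.4-pro",
}

def resolve(name: str) -> str:
    """Resolve an alias to a canonical model id; pass through
    names that are already canonical or unknown."""
    return ALIASES.get(name.lower(), name)

print(resolve("opus"))     # anthropic/claude-opus-4.6
print(resolve("GPT5PRO"))  # openai/gpt-5.4-pro
```

Because unknown names pass through unchanged, callers that already use full ids like deepseek/deepseek-v3.2-exp are unaffected by alias churn.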
🤖 Generated with Claude Code